

Search for: All records

Creators/Authors contains: "Erbin, Harold"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Abstract Signal detection is one of the main challenges in data science. As often happens in data analysis, the signal in the data may be corrupted by noise. A wide range of techniques aim to extract the relevant degrees of freedom from data, but some problems remain difficult. This is notably the case for signal detection in almost continuous spectra when the signal-to-noise ratio is small. This paper follows a recent line of work that tackles this issue with field-theoretical methods. Previous analyses focused on equilibrium Boltzmann distributions for an effective field representing the degrees of freedom of the data, and established a relation between signal detection and Z_2-symmetry breaking. In this paper, we consider a stochastic field framework inspired by the so-called 'model A' and show that the ability to reach, or not reach, an equilibrium state is correlated with the shape of the dataset. In particular, by studying the renormalization group of the model, we show that the weak-ergodicity prescription is always broken for sufficiently small signals when the data distribution is close to the Marchenko–Pastur law. This enables the definition of a detection threshold in the regime where the signal-to-noise ratio is small.
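The Marchenko–Pastur law mentioned in the abstract can be seen in a few lines of numpy. This is only a toy illustration of the classical random-matrix picture (bulk spectrum of pure noise, with signal appearing as outliers), not the paper's field-theoretic method; the dimensions `p, n` are arbitrary choices.

```python
import numpy as np

# Eigenvalues of a pure-noise sample covariance matrix fill the
# Marchenko-Pastur bulk; a strong enough signal would escape it.
rng = np.random.default_rng(0)
p, n = 200, 400                      # data dimension, number of samples
X = rng.standard_normal((p, n))      # pure noise, no signal
C = X @ X.T / n                      # empirical covariance matrix
evals = np.linalg.eigvalsh(C)

q = p / n
lo, hi = (1 - q**0.5) ** 2, (1 + q**0.5) ** 2   # Marchenko-Pastur support edges
print(f"bulk support ~ [{lo:.3f}, {hi:.3f}], largest eigenvalue = {evals.max():.3f}")
```

When the signal-to-noise ratio is small no eigenvalue escapes the bulk, which is precisely the regime where spectrum-edge criteria fail and a detection threshold of the kind constructed in the paper becomes relevant.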
  2. Abstract The geometry of the 4-string contact interaction of closed string field theory is characterized using machine learning. We obtain Strebel quadratic differentials on 4-punctured spheres as a neural network by performing unsupervised learning with a custom-built loss function. This allows us to solve for local coordinates and compute their associated mapping radii numerically. We also train a neural network to distinguish the vertex region from the Feynman region. As a check, the 4-tachyon contact term in the tachyon potential is computed, and good agreement with results in the literature is observed. We argue that our algorithm is manifestly independent of the number of punctures, and that scaling it to characterize the geometry of the n-string contact interaction is feasible.
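The key idea of "unsupervised learning with a custom-built loss" is that the loss encodes a defining equation rather than labelled targets. A schematic analogue (not the paper's actual setup, whose loss encodes the conditions defining a Strebel differential): train a tiny network to satisfy f(x)^2 = x on [0, 1], so it must discover a square root without ever seeing one. The network size and optimizer here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8                                      # hidden width
theta = 0.3 * rng.standard_normal(3 * H)   # packed parameters (w1, b1, w2)

def f(theta, x):
    """One-hidden-layer tanh network, parameters packed in a flat vector."""
    w1, b1, w2 = theta[:H], theta[H:2 * H], theta[2 * H:]
    return np.tanh(np.outer(x, w1) + b1) @ w2

xs = np.linspace(0.0, 1.0, 32)

def loss(theta):
    # "Custom" unsupervised loss: mean-squared residual of the defining
    # equation f(x)^2 = x -- no labelled targets appear anywhere.
    return np.mean((f(theta, xs) ** 2 - xs) ** 2)

def grad(theta, eps=1e-5):
    """Finite-difference gradient, to keep the sketch dependency-free."""
    g = np.empty_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

loss0 = loss(theta)
for _ in range(2000):                      # plain gradient descent
    theta -= 0.1 * grad(theta)
print(f"loss: {loss0:.4f} -> {loss(theta):.4f}")
```

The design point is that anything expressible as a residual can serve as the loss, which is what makes the approach in the paper insensitive to the number of punctures.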
  3. Abstract We compute the gravitational action of a free massive Majorana fermion coupled to two-dimensional gravity on compact Riemann surfaces of arbitrary genus. The structure is similar to the case of the massive scalar. The small-mass expansion of the gravitational action yields the Liouville action at zeroth order, and we can identify the Mabuchi action at first order. While the massive Majorana action is a conformal deformation of the massless Majorana CFT, we find an action different from the one given by the David–Distler–Kawai (DDK) ansatz.
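Schematically, the small-mass expansion described in the abstract has the form

```latex
S_{\text{grav}}[g_0, g]
  \;=\; c_0\, S_{\text{Liouville}}[g_0, g]
  \;+\; c_1\, \mu\, S_{\text{Mabuchi}}[g_0, g]
  \;+\; \mathcal{O}(\mu^2),
```

where $\mu$ stands for the small-mass expansion parameter ($m^2 A$, with $A$ the area, in the massive-scalar case; the abstract does not spell out the Majorana power) and the coefficients $c_0$, $c_1$ are left unspecified here, since the abstract does not fix them.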
  4. Abstract We continue earlier efforts in computing the dimensions of tangent-space cohomologies of Calabi–Yau manifolds using deep learning. In this paper, we consider the dataset of all Calabi–Yau four-folds constructed as complete intersections in products of projective spaces. Employing neural networks inspired by state-of-the-art computer-vision architectures, we improve earlier benchmarks and demonstrate that all four non-trivial Hodge numbers can be learned at the same time using a multi-task architecture. With a 30% (80%) training ratio, we reach an accuracy of 100% for h^(1,1) and 97% for h^(2,1) (100% for both), 81% (96%) for h^(3,1), and 49% (83%) for h^(2,2). Assuming that the Euler number is known, as it is easy to compute, and taking into account the linear constraint arising from index computations, we get 100% total accuracy.
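The "linear constraint arising from index computations" can be made concrete: on a Calabi–Yau four-fold the four non-trivial Hodge numbers satisfy a standard index-theorem identity, so three of them determine the fourth exactly. A minimal sketch, assuming that identity (the function name is our own):

```python
def h22_from_constraint(h11, h21, h31):
    """Index-theorem linear constraint on a Calabi-Yau four-fold:
    h^(2,2) = 2 * (22 + 2*h^(1,1) + 2*h^(3,1) - h^(2,1)).
    Knowing three Hodge numbers thus fixes the fourth for free, which is
    why the hardest-to-learn h^(2,2) need not be predicted at all.
    """
    return 2 * (22 + 2 * h11 + 2 * h31 - h21)

# Sanity check on the sextic four-fold in P^5:
# h^(1,1) = 1, h^(2,1) = 0, h^(3,1) = 426 give h^(2,2) = 1752.
print(h22_from_constraint(1, 0, 426))  # → 1752
```

This is how perfect accuracy on three Hodge numbers, combined with the known Euler number, translates into 100% total accuracy.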